113 research outputs found

    Safer_RAIN: A DEM-based hierarchical filling-&-spilling algorithm for pluvial flood hazard assessment and mapping across large urban areas

    The increase in frequency and intensity of extreme precipitation events caused by the changing climate (e.g., cloudbursts, rainstorms, heavy rainfall, hail, heavy snow), combined with the high population density and concentration of assets, makes urban areas particularly vulnerable to pluvial flooding. Hence, assessing their vulnerability under current and future climate scenarios is of paramount importance. Detailed hydrologic-hydraulic numerical modeling is resource intensive and therefore scarcely suitable for performing consistent hazard assessments across large urban settlements. Given the steadily increasing availability of LiDAR (Light Detection And Ranging) high-resolution DEMs (Digital Elevation Models), several studies highlighted the potential of fast-processing DEM-based methods, such as the Hierarchical Filling-&-Spilling or Puddle-to-Puddle Dynamic Filling-&-Spilling Algorithms (abbreviated herein as HFSAs). We develop a fast-processing HFSA, named Safer_RAIN, that enables mapping of pluvial flooding in large urban areas by accounting for spatially distributed rainfall input and infiltration processes through a pixel-based Green-Ampt model. We present the first applications of the algorithm to two case studies in Northern Italy. Safer_RAIN output is compared against ground evidence and detailed output from a two-dimensional (2D) hydrologic and hydraulic numerical model (overall index of agreement between Safer_RAIN and 2D benchmark model: sensitivity and specificity up to 71% and 99%, respectively), highlighting potential and limitations of the proposed algorithm for identifying pluvial flood-hazard hotspots across large urban environments.
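The core step of DEM-based filling-&-spilling methods like the one described above is raising each depression to its lowest spill elevation. A minimal sketch of that step, using the well-known priority-flood approach on a toy grid (this is an illustrative reconstruction, not the Safer_RAIN implementation):

```python
import heapq

def fill_depressions(dem):
    """Priority-flood depression filling on a 2-D grid DEM.

    Each pit cell's water level is raised to its lowest spill
    elevation -- the basic operation behind hierarchical
    filling-&-spilling algorithms (HFSAs).
    """
    rows, cols = len(dem), len(dem[0])
    filled = [[None] * cols for _ in range(rows)]
    heap = []
    # Seed the queue with the border cells: water can always drain there.
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                filled[r][c] = dem[r][c]
                heapq.heappush(heap, (dem[r][c], r, c))
    # Grow inward from the lowest seen cell; any unvisited neighbor lower
    # than the current water level is raised to that level (its spill height).
    while heap:
        level, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and filled[nr][nc] is None:
                filled[nr][nc] = max(dem[nr][nc], level)
                heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return filled

# A pit (elevation 1) that drains over a sill of height 4 to an outlet (3):
dem = [[9, 9, 9, 9],
       [9, 1, 4, 3],
       [9, 9, 9, 9]]
print(fill_depressions(dem)[1][1])  # 4: the pit fills up to its spill level
```

In a full HFSA, the filled depressions form a hierarchy of nested puddles that are then filled and spilled against the (possibly spatially distributed) rainfall volume remaining after infiltration.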

    The Price of Defense

    We consider a game on a graph G = ⟨V, E⟩ with two confronting classes of randomized players: ν attackers, who choose vertices and seek to minimize the probability of getting caught, and a single defender, who chooses edges and seeks to maximize the expected number of attackers it catches. In a Nash equilibrium, no player has an incentive to unilaterally deviate from her randomized strategy. The Price of Defense is the worst-case ratio, over all Nash equilibria, of ν to the expected utility of the defender. We orchestrate a strong interplay of arguments from Game Theory and Graph Theory to obtain both general and specific results in the considered setting: (1) Via a reduction to a Two-Players, Constant-Sum game, we observe that an arbitrary Nash equilibrium is computable in polynomial time. Further, we prove a general lower bound of |V|/2 on the Price of Defense. We derive a characterization of graphs with a Nash equilibrium attaining this lower bound, which reveals a promising connection to Fractional Graph Theory; thereby, it implies an efficient recognition algorithm for such Defense-Optimal graphs. (2) We study some specific classes of Nash equilibria, both for their computational complexity and for their incurred Price of Defense. The classes are defined by imposing structure on the players’ randomized strategies: either graph-theoretic structure on the supports, or symmetry and uniformity structure on the probabilities. We develop novel graph-theoretic techniques to derive trade-offs between computational complexity and the Price of Defense for these classes. Some of the techniques touch upon classical milestones of Graph Theory; for example, we derive the first game-theoretic characterization of König-Egerváry graphs as graphs admitting a Matching Nash equilibrium.
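To make the ν-to-utility ratio concrete, here is a small worked computation on a 4-cycle with a hypothetical strategy profile of the shape suggested by the abstract (attackers uniform over vertices, defender uniform over a perfect matching). This only evaluates the ratio for that profile; it does not verify the equilibrium conditions:

```python
from fractions import Fraction

# Toy 4-cycle: vertices 0..3.
V = [0, 1, 2, 3]
nu = 6  # number of attackers (any value works; it cancels in the ratio)

# Hypothetical profile: attackers uniform over V, defender uniform over
# a perfect matching {(0,1), (2,3)} of the cycle.
attacker = {v: Fraction(1, len(V)) for v in V}
matching = [(0, 1), (2, 3)]
defender = {e: Fraction(1, len(matching)) for e in matching}

# An attacker at vertex v is caught exactly when the defender's chosen
# edge covers v; sum over vertices and covering edges.
p_caught = sum(attacker[v] * sum(p for e, p in defender.items() if v in e)
               for v in V)
defender_utility = nu * p_caught  # expected number of attackers caught

ratio = Fraction(nu) / defender_utility
print(ratio)  # 2 == |V|/2, matching the general lower bound
```

Each vertex is covered by exactly one matching edge, so every attacker is caught with probability 2/|V|, and the ratio ν / (ν · 2/|V|) collapses to |V|/2.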

    Lower Bounds for Encrypted Multi-Maps and Searchable Encryption in the Leakage Cell Probe Model

    Encrypted multi-maps (EMMs) enable clients to outsource the storage of a multi-map to a potentially untrusted server while maintaining the ability to perform operations in a privacy-preserving manner. EMMs are an important primitive as they are an integral building block for many practical applications such as searchable encryption and encrypted databases. In this work, we formally examine the tradeoffs between privacy and efficiency for EMMs. Currently, all known dynamic EMMs with constant overhead reveal whether or not two operations are performed on the same key, which we denote as the global key-equality pattern. In our main result, we present strong evidence that the leakage of the global key-equality pattern is inherent for any dynamic EMM construction with O(1) efficiency. In particular, we consider the slightly smaller leakage of the decoupled key-equality pattern, where leakage of key-equality between update and query operations is decoupled and the adversary only learns whether two operations of the same type are performed on the same key or not. We show that any EMM with at most decoupled key-equality pattern leakage incurs Ω(log n) overhead in the leakage cell probe model. This is tight, as there exist ORAM-based constructions of EMMs with logarithmic slowdown that leak no more than the decoupled key-equality pattern (and, in fact, much less). Furthermore, we present stronger lower bounds: encrypted multi-maps that leak at most the decoupled key-equality pattern but are able to perform one of either the update or query operations in plaintext still require Ω(log n) overhead. Finally, we extend our lower bounds to show that dynamic, response-hiding searchable encryption schemes must also incur Ω(log n) overhead even when one of either the document updates or searches may be performed in the plaintext.
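The two leakage profiles compared in the abstract can be made precise with a small sketch. Given an operation sequence of hypothetical (type, key) pairs, the global pattern reports key-equality for every pair of operations, while the decoupled pattern reports it only for pairs of the same type:

```python
def global_key_equality(ops):
    """Leakage: for every pair of operations, whether they touch the
    same key, regardless of whether each op is an update or a query."""
    return {(i, j): ops[i][1] == ops[j][1]
            for i in range(len(ops)) for j in range(i + 1, len(ops))}

def decoupled_key_equality(ops):
    """Weaker leakage: key-equality only between operations of the
    same type, so update-vs-query correlations stay hidden."""
    return {(i, j): ops[i][1] == ops[j][1]
            for i in range(len(ops)) for j in range(i + 1, len(ops))
            if ops[i][0] == ops[j][0]}

# Hypothetical operation sequence: (type, key) pairs.
ops = [("update", "k1"), ("query", "k1"), ("query", "k2"), ("update", "k2")]
g = global_key_equality(ops)
d = decoupled_key_equality(ops)
# The global pattern reveals that op 0 (an update) and op 1 (a query)
# touch the same key; the decoupled pattern omits that pair entirely.
print((0, 1) in g, (0, 1) in d)  # True False
```

The lower bound says that hiding even just these cross-type pairs, as ORAM-based EMMs do, already forces Ω(log n) overhead.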

    Lower Bounds for Multi-Server Oblivious RAMs

    In this work, we consider the construction of oblivious RAMs (ORAMs) in a setting with multiple servers where the adversary may corrupt a subset of the servers. We present an Ω(log n) overhead lower bound for any k-server ORAM that limits any PPT adversary to distinguishing advantage at most 1/(4k) when only one server is corrupted. In other words, if one insists on negligible distinguishing advantage, then multi-server ORAMs cannot be faster than single-server ORAMs even with polynomially many servers of which only one unknown server is corrupted. Our results apply to ORAMs that may err with probability at most 1/128 as well as to scenarios where the adversary corrupts larger subsets of servers. We also extend our lower bounds to other important data structures including oblivious stacks, queues, deques, priority queues, and search trees.
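A toy example helps separate content privacy from access-pattern privacy in the multi-server setting the abstract considers. Below, memory is additively secret-shared (XOR) across two servers, so one corrupted server learns nothing about the stored values, yet its view still equals the logical access sequence; hiding that view is exactly what a multi-server ORAM must add, and what the lower bound prices at Ω(log n). This is an illustrative sketch, not a construction from the paper:

```python
import secrets

class SharedMemory:
    """Toy 2-server storage: each word is XOR secret-shared across the
    servers, so a single corrupted server learns nothing about contents."""
    def __init__(self, data):
        self.shares0 = [secrets.randbits(8) for _ in data]
        self.shares1 = [s ^ x for s, x in zip(self.shares0, data)]
        self.views = ([], [])  # physical indices each server observes

    def read(self, i):
        # Both servers are asked for position i in the clear.
        self.views[0].append(i)
        self.views[1].append(i)
        return self.shares0[i] ^ self.shares1[i]

mem = SharedMemory([10, 20, 30, 40])
for i in [3, 1, 3]:
    mem.read(i)
# Contents are hidden, but either server's view IS the logical access
# sequence, so one corrupted server distinguishes any two distinct
# access patterns with advantage 1 -- far above the 1/(4k) threshold.
print(mem.views[0])  # [3, 1, 3]
```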

    Cryptology and Network Security


    Limits of Preprocessing for Single-Server PIR


    5th italian conference on Algorithms and Complexity

    The conference proceedings are published by Springer Verlag in LNCS 265.

    Third International Conference Security in Communication Networks

    This book contains the papers accepted for publication, after a peer-review process, at the Third International Conference on Security in Communication Networks, held in Amalfi (SA) on September 12-13, 2002. Topics covered in this venue include digital signatures, zero-knowledge proof systems, secret sharing schemes, and cryptanalysis.